TRYGVE HAAVELMO AND THE EMERGENCE OF CAUSAL CALCULUS
- Judea Pearl
- Journal: Econometric Theory / Volume 31 / Issue 1 / February 2015
- Published online by Cambridge University Press: 10 June 2014, pp. 152-179
- Article
Haavelmo was the first to recognize the capacity of economic models to guide policies. This paper describes some of the barriers that Haavelmo’s ideas have had (and still have) to overcome and lays out a logical framework that has evolved from Haavelmo’s insight and matured into a coherent and comprehensive account of the relationships between theory, data, and policy questions. The mathematical tools that emerge from this framework now enable investigators to answer complex policy and counterfactual questions using simple routines, some by mere inspection of the model’s structure. Several such problems are illustrated by examples, including misspecification tests, nonparametric identification, mediation analysis, and introspection. Finally, we observe that economists are largely unaware of the benefits that Haavelmo’s ideas bestow upon them and, to close this gap, we identify concrete recent advances in causal analysis that economists can utilize in research and education.
Nancy Cartwright on Hunting Causes - Hunting Causes and Using Them: Approaches in Philosophy and Economics, Nancy Cartwright. Cambridge University Press, 2008, x + 270 pages.
- Judea Pearl
- Journal: Economics & Philosophy / Volume 26 / Issue 1 / March 2010
- Published online by Cambridge University Press: 17 March 2010, pp. 69-77
- Article
Epilogue - The Art and Science of Cause and Effect
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 401-428
- Chapter
Summary
A public lecture delivered November 1996 as part of the UCLA Faculty Research Lectureship Program
The topic of this lecture is causality – namely, our awareness of what causes what in the world and why it matters.
Though it is basic to human thought, causality is a notion shrouded in mystery, controversy, and caution, because scientists and philosophers have had difficulties defining when one event truly causes another.
We all understand that the rooster's crow does not cause the sun to rise, but even this simple fact cannot easily be translated into a mathematical equation.
Today, I would like to share with you a set of ideas which I have found very useful in studying phenomena of this kind. These ideas have led to practical tools that I hope you will find useful on your next encounter with cause and effect.
It is hard to imagine anyone here who is not dealing with cause and effect.
Whether you are evaluating the impact of bilingual education programs or running an experiment on how mice distinguish food from danger or speculating about why Julius Caesar crossed the Rubicon or diagnosing a patient or predicting who will win the presidential election, you are dealing with a tangled web of cause–effect considerations.
The story that I am about to tell is aimed at helping researchers deal with the complexities of such considerations, and to clarify their meaning.
This lecture is divided into three parts.
I begin with a brief historical sketch of the difficulties that various disciplines have had with causation.
Next I outline the ideas that reduce or eliminate several of these historical difficulties.
Finally, in honor of my engineering background, I will show how these ideas lead to simple practical tools, which will be demonstrated in the areas of statistics and social science.
In the beginning, as far as we can tell, causality was not problematic.
The urge to ask why and the capacity to find causal explanations came very early in human development.
The Bible, for example, tells us that just a few hours after tasting from the tree of knowledge, Adam is already an expert in causal arguments.
9 - Probability of Causation: Interpretation and Identification
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 283-308
- Chapter
Summary
Come and let us cast lots to find out who is to blame for this ordeal.
Jonah 1:7
Preface
Assessing the likelihood that one event was the cause of another guides much of what we understand about (and how we act in) the world. For example, according to common judicial standard, judgment in favor of the plaintiff should be made if and only if it is “more probable than not” that the defendant's action was the cause of the plaintiff's damage (or death). But causation has two faces, necessary and sufficient; which of the two have lawmakers meant us to consider? And how are we to evaluate their probabilities?
This chapter provides formal semantics for the probability that event x was a necessary or sufficient cause (or both) of another event y. We then explicate conditions under which the probability of necessary (or sufficient) causation can be learned from statistical data, and we show how data from both experimental and nonexperimental studies can be combined to yield information that neither kind of study alone can provide.
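The kind of combination the summary describes can be sketched numerically. The fragment below uses entirely hypothetical frequencies and bounds PN, the probability that Y = 1 would not have occurred had X been 0, given that X = 1 and Y = 1 in fact occurred; the bounding formulas follow the Tian–Pearl pattern of merging observational with experimental data, not any derivation quoted here.

```python
# Hypothetical binary data.  Observational frequencies:
p_x = 0.5               # P(X=1)
p_y_given_x1 = 0.5      # P(Y=1 | X=1)
p_y_given_x0 = 0.2      # P(Y=1 | X=0)
p_xy = p_x * p_y_given_x1                  # P(X=1, Y=1)
p_y = p_xy + (1 - p_x) * p_y_given_x0      # P(Y=1)

# Experimental finding (here compatible with an exogenous X):
p_y_do_x0 = 0.2         # P(Y=1 | do(X=0))

# Bounds on PN = P(Y would be 0 had X been 0 | X=1, Y=1):
pn_lower = max(0.0, (p_y - p_y_do_x0) / p_xy)
p_x0y0 = (1 - p_x) * (1 - p_y_given_x0)    # P(X=0, Y=0)
pn_upper = min(1.0, ((1 - p_y_do_x0) - p_x0y0) / p_xy)
# With these numbers PN is bounded between 0.6 and 1.0.
```

Note how neither data source alone fixes PN: the observational joint supplies P(x, y), while the experimental arm supplies P(Y=1 | do(X=0)), and only together do they pin down the interval.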
INTRODUCTION
The standard counterfactual definition of causation (i.e., that E would not have occurred were it not for C) captures the notion of “necessary cause.” Competing notions such as “sufficient cause” and “necessary and sufficient cause” are of interest in a number of applications, and these too can be given concise mathematical definitions in structural model semantics (Section 7.1). Although the distinction between necessary and sufficient causes goes back to J. S. Mill (1843), it received semiformal explications only in the 1960s – via conditional probabilities (Good 1961) and logical implications (Mackie 1965; Rothman 1976). These explications suffer from basic semantical difficulties, and they do not yield procedures for computing probabilities of causes such as those provided by the structural account (Sections 7.1.3 and 8.3).
In this chapter we explore the counterfactual interpretation of necessary and sufficient causes, illustrate the application of structural model semantics to the problem of identifying probabilities of causes, and present, by way of examples, new ways of estimating probabilities of causes from statistical data. Additionally, we argue that necessity and sufficiency are two distinct facets of causation and that both facets should take part in the construction of causal explanations.
Contents
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. vii-xiv
- Chapter
11 - Reflections, Elaborations, and Discussions with Readers
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 331-400
- Chapter
Summary
As X-rays are to the surgeon, graphs are for causation.
The author
In this chapter, I reflect back on the material covered in Chapters 1 to 10, discuss issues that require further elaboration, introduce new results obtained in the past eight years, and answer questions of general interest posed to me by readers of the first edition. These range from clarification of specific passages in the text, to conceptual and philosophical issues concerning the controversial status of causation, how it is taught in classrooms and how it is treated in textbooks and research articles.
The discussions follow roughly the order in which these issues are presented in the book, with section numbers indicating the corresponding chapters.
CAUSAL, STATISTICAL, AND GRAPHICAL VOCABULARY
Is the Causal–Statistical Dichotomy Necessary?
Question to Author (from many readers)
Chapter 1 (Section 1.5) insists on a sharp distinction between statistical and causal concepts; the former are definable in terms of a joint distribution function (of observed variables), the latter are not. Considering that many concepts which the book classifies as “causal” (e.g., “randomization,” “confounding,” and “instrumental variables”) are commonly discussed in the statistical literature, is this distinction crisp? Is it necessary? Is it useful?
Author Answer
The distinction is crisp, necessary, and useful, and, as I tell audiences in all my lectures: “If you get nothing out of this lecture except the importance of keeping statistical and causal concepts apart, I would consider it a success.” Here, I would dare go even further:
“If I am remembered for no other contribution except for insisting on the causal–statistical distinction, I would consider my scientific work worthwhile.”
The distinction is embarrassingly crisp and simple, because it is based on the fundamental distinction between statics and kinematics. Standard statistical analysis, typified by regression, estimation, and hypothesis-testing techniques, aims to assess parameters of a static distribution from samples drawn from that distribution. With the help of such parameters, one can infer associations among variables, estimate the likelihood of past and future events, as well as update the likelihood of events in light of new evidence or new measurements. These tasks are managed well by standard statistical analysis so long as experimental conditions remain the same.
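The crispness of the distinction can be seen in a small numeric sketch (all numbers hypothetical). In a model where a common cause Z drives both X and Y and X has no effect on Y at all, the statistical quantity P(Y=1 | X=1) shows a strong association, while the causal quantity P(Y=1 | do(X=1)) does not; the latter requires the diagram, not just the joint distribution.

```python
# Purely confounded model: Z -> X and Z -> Y, with NO arrow from X to Y.
p_z1 = 0.5
p_x1_given_z = {0: 0.2, 1: 0.8}   # P(X=1 | z)
p_y1_given_z = {0: 0.1, 1: 0.9}   # P(Y=1 | z); X does not appear

def p_z(z):
    return p_z1 if z == 1 else 1 - p_z1

# "Seeing": P(Y=1 | X=1), computable from the joint distribution alone.
num = sum(p_z(z) * p_x1_given_z[z] * p_y1_given_z[z] for z in (0, 1))
den = sum(p_z(z) * p_x1_given_z[z] for z in (0, 1))
p_y_seeing = num / den            # 0.74: strong association

# "Doing": P(Y=1 | do(X=1)) = sum_z P(z) P(Y=1 | z); forcing X leaves Z alone.
p_y_doing = sum(p_z(z) * p_y1_given_z[z] for z in (0, 1))   # 0.5 = P(Y=1)
```

Two models with this same joint distribution but different diagrams would give different answers for the do-quantity, which is exactly why causal concepts are not definable from the distribution of observed variables.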
Frontmatter
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. i-iv
- Chapter
Name Index
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 454-459
- Chapter
10 - The Actual Cause
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 309-330
- Chapter
Summary
And now remains
That we find out the cause of this effect,
Or rather say, the cause of this defect,
For this effect defective comes by cause.
Shakespeare (Hamlet II.ii. 100–4)
Preface
This chapter offers a formal explication of the notion of “actual cause,” an event recognized as responsible for the production of a given outcome in a specific scenario, as in: “Socrates drinking hemlock was the actual cause of Socrates' death.” Human intuition is extremely keen in detecting and ascertaining this type of causation and hence is considered the key to constructing explanations (Section 7.2.3) and the ultimate criterion (known as “cause in fact”) for determining legal responsibility.
Yet despite its ubiquity in natural thoughts, actual causation is not an easy concept to formulate. A typical example (introduced by Wright 1988) considers two fires advancing toward a house. If fire A burned the house before fire B, we (and many juries nationwide) would surely consider fire A “the actual cause” of the damage, though either fire alone is sufficient (and neither one was necessary) for burning the house. Clearly, actual causation requires information beyond that of necessity and sufficiency; the actual process mediating between the cause and the effect must enter into consideration. But what precisely is a “process” in the language of structural models? What aspects of causal processes define actual causation? How do we piece together evidence about the uncertain aspects of a scenario and so compute probabilities of actual causation?
In this chapter we propose a plausible account of actual causation that can be formulated in structural model semantics. The account is based on the notion of sustenance, to be defined in Section 10.2, which combines aspects of necessity and sufficiency to measure the capacity of the cause to maintain the effect despite certain structural changes in the model. We show by examples how this account avoids problems associated with the counterfactual dependence account of Lewis (1986) and how it can be used both in generating explanations of specific scenarios and in computing the probabilities that such explanations are in fact correct.
1 - Introduction to Probabilities, Graphs, and Causal Models
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 1-40
- Chapter
Summary
Chance gives rise to thoughts, and chance removes them.
Pascal (1670)
INTRODUCTION TO PROBABILITY THEORY
Why Probabilities?
Causality connotes lawlike necessity, whereas probabilities connote exceptionality, doubt, and lack of regularity. Still, there are two compelling reasons for starting with, and in fact stressing, probabilistic analysis of causality; one is fairly straightforward, the other more subtle.
The simple reason rests on the observation that causal utterances are often used in situations that are plagued with uncertainty. We say, for example, “reckless driving causes accidents” or “you will fail the course because of your laziness” (Suppes 1970), knowing quite well that the antecedents merely tend to make the consequences more likely, not absolutely certain. Any theory of causality that aims at accommodating such utterances must therefore be cast in a language that distinguishes various shades of likelihood – namely, the language of probabilities. Connected with this observation, we note that probability theory is currently the official mathematical language of most disciplines that use causal modeling, including economics, epidemiology, sociology, and psychology. In these disciplines, investigators are concerned not merely with the presence or absence of causal connections but also with the relative strengths of those connections and with ways of inferring those connections from noisy observations. Probability theory, aided by methods of statistical analysis, provides both the principles and the means of coping with – and drawing inferences from – such observations.
The more subtle reason concerns the fact that even the most assertive causal expressions in natural language are subject to exceptions, and those exceptions may cause major difficulties if processed by standard rules of deterministic logic. Consider, for example, the two plausible premises:
1. My neighbor's roof gets wet whenever mine does.
2. If I hose my roof it will get wet.
Taken literally, these two premises imply the implausible conclusion that my neighbor's roof gets wet whenever I hose mine.
Such paradoxical conclusions are normally attributed to the finite granularity of our language, as manifested in the many exceptions that are implicit in premise 1. Indeed, the paradox disappears once we take the trouble of explicating those exceptions and write, for instance:
Dedication
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. v-vi
- Chapter
Causality
- 2nd edition
- Judea Pearl
- Published online: 05 March 2013
- Print publication: 14 September 2009
Written by one of the preeminent researchers in the field, this book provides a comprehensive exposition of modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, economics, philosophy, cognitive science, and the health and social sciences. Judea Pearl presents and unifies the probabilistic, manipulative, counterfactual, and structural approaches to causation and devises simple mathematical tools for studying the relationships between causal connections and statistical associations. Cited in more than 2,100 scientific publications, it continues to liberate scientists from the traditional molds of statistical thinking. In this revised edition, Judea Pearl elucidates thorny issues, answers readers' questions, and offers a panoramic view of recent advances in this field of research. Causality will be of interest to students and professionals in a wide variety of fields. Judea Pearl received the 2011 Rumelhart Prize from the Cognitive Science Society for his leading research in artificial intelligence (AI) and systems.
8 - Imperfect Experiments: Bounding Effects and Counterfactuals
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 259-282
- Chapter
Summary
Would that I could discover truth as easily as I can uncover falsehood.
Cicero (44 B.C.)
Preface
In this chapter we describe how graphical and counterfactual models (Sections 3.2 and 7.1) can combine to elicit causal information from imperfect experiments: experiments that deviate from the ideal protocol of randomized control. A common deviation occurs, for example, when subjects in a randomized clinical trial do not fully comply with their assigned treatment, thus compromising the identification of causal effects. When conditions for identification are not met, the best one can do is derive bounds for the quantities of interest – namely, a range of possible values that represents our ignorance about the data-generating process and that cannot be improved with increasing sample size. This chapter demonstrates (i) that such bounds can be derived by simple algebraic methods, (ii) that, despite the imperfection of the experiments, the derived bounds can yield significant and sometimes accurate information on the impact of a policy on the entire population as well as on a particular individual in the study, and (iii) that prior knowledge can be harnessed effectively to obtain Bayesian estimates of those impacts.
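The idea that a range of possible values survives even unlimited data can be illustrated with the simplest member of this family: the no-assumptions (Manski-type) bounds on the average causal effect, which use observational frequencies only. The chapter's own linear-programming bounds, which exploit the randomized assignment as an instrument, are tighter; the numbers below are hypothetical.

```python
# Hypothetical observational frequencies for binary treatment X and outcome Y.
p_x1, p_y_given_x1 = 0.6, 0.7
p_y_given_x0 = 0.5
p_x1_y = p_x1 * p_y_given_x1          # P(X=1, Y=1) = 0.42
p_x0_y = (1 - p_x1) * p_y_given_x0    # P(X=0, Y=1) = 0.20

# P(Y=1 | do(X=1)) must lie in [P(x1, y), P(x1, y) + P(x0)]: the untreated
# subjects' counterfactual response is completely unknown.  Likewise for
# do(X=0).  Differencing the interval endpoints bounds the average effect:
ace_lower = p_x1_y - p_x0_y - p_x1          # -0.38
ace_upper = p_x1_y + (1 - p_x1) - p_x0_y    #  0.62
```

The interval always has width exactly 1, which is what makes the instrumental information from an (imperfect) experiment so valuable: it is what narrows this window to something informative.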
INTRODUCTION
Imperfect and Indirect Experiments
Standard experimental studies in the biological, medical, and behavioral sciences invariably invoke the instrument of randomized control; that is, subjects are assigned at random to various groups (or treatments or programs), and the mean differences between participants in different groups are regarded as measures of the efficacies of the associated programs. Deviations from this ideal setup may take place either by failure to meet any of the experimental requirements or by deliberate attempts to relax these requirements. Indirect experiments are studies in which randomized control is either infeasible or undesirable. In such experiments, subjects are still assigned at random to various groups, but members of each group are simply encouraged (rather than forced) to participate in the program associated with the group; it is up to the individuals to select among the programs.
Recently, use of strict randomization in social and medical experimentation has been questioned for three major reasons.
1. Perfect control is hard to achieve or ascertain. Studies in which treatment is assumed to be randomized may be marred by uncontrolled imperfect compliance. For example, subjects experiencing adverse reactions to an experimental drug may decide to reduce the assigned dosage.
2 - A Theory of Inferred Causation
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 41-64
- Chapter
Summary
I would rather discover one causal law than be King of Persia.
Democritus (460–370 B.C.)
Preface
The possibility of learning causal relationships from raw data has been on philosophers’ dream lists since the time of Hume (1711–1776). That possibility entered the realm of formal treatment and feasible computation in the mid-1980s, when the mathematical relationships between graphs and probabilistic dependencies came to light. The approach described herein is an outgrowth of Rebane and Pearl (1987) and Pearl (1988b, Chap. 8), which describes how causal relationships can be inferred from nontemporal statistical data if one makes certain assumptions about the underlying process of data generation (e.g., that it has a tree structure). The prospect of inferring causal relationships from weaker structural assumptions (e.g., general directed acyclic graphs) has motivated parallel research efforts at three universities: UCLA, Carnegie Mellon University (CMU), and Stanford. The UCLA and CMU teams pursued an approach based on searching the data for patterns of conditional independencies that reveal fragments of the underlying structure and then piecing those fragments together to form a coherent causal model (or a set of such models). The Stanford group pursued a Bayesian approach, where data are used to update prior probabilities assigned to candidate causal structures (Cooper and Herskovits 1991). The UCLA and CMU efforts have led to similar theories and almost identical discovery algorithms, which were implemented in the TETRAD II program (Spirtes et al. 1993). The Bayesian approach has since been pursued by a number of research teams (Singh and Valtorta 1995; Heckerman et al. 1994) and now serves as the basis for several graph-based learning methods (Jordan 1998). This chapter describes the approach pursued by Tom Verma and me in the period 1988–1992, and it briefly summarizes related extensions, refinements, and improvements that have been advanced by the CMU team and others. 
Some of the philosophical rationale behind this development, primarily the assumption of minimality, is implicit in the Bayesian approach as well (Section 2.9.1).
The basic idea of automating the discovery of causes – and the specific implementation of this idea in computer programs – came under fierce debate in a number of forums (Cartwright 1995a; Humphreys and Freedman 1996; Cartwright 1999; Korb and Wallace 1997; McKim and Turner 1997; Robins and Wasserman 1999). Selected aspects of this debate will be addressed in the discussion section at the end of this chapter (Section 2.9.1).
4 - Actions, Plans, and Direct Effects
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 107-132
- Chapter
Summary
He whose actions exceed his wisdom, his wisdom shall endure.
Rabbi Hanina ben Dosa (1st century A.D.)
Preface
So far, our analysis of causal effects has focused on primitive interventions of the form do(x), which stood for setting the value of variable X to a fixed constant, x, and asking for the effect of this action on the probabilities of some response variables Y. In this chapter we introduce several extensions of this analysis.
First (Section 4.1), we discuss the status of actions vis-à-vis observations in probability theory, decision analysis, and causal modeling, and we advance the thesis that the main role of causal models is to facilitate the evaluation of the effect of novel actions and policies that were unanticipated during the construction of the model.
In Section 4.2 we extend the identification analysis of Chapter 3 to conditional actions of the form “do x if you see z” and stochastic policies of the form “do x with probability p if you see z.” We shall see that the evaluation and identification of these more elaborate interventions can be obtained from the analysis of primitive interventions. In Section 4.3, we use the intervention calculus developed in Chapter 3 to give a graphical characterization of a set of semi-Markovian models for which the causal effect of one variable on another can be identified.
We address in Section 4.4 the problem of evaluating the effect of sequential plans – namely, sequences of time-varying actions (some taken concurrently) designed to produce a certain outcome. We provide a graphical method of estimating the effect of such plans from nonexperimental studies in which some of the actions are influenced by observations and former actions, some observations are influenced by the actions, and some confounding variables are unmeasured. We show that there is substantial advantage to analyzing a plan into its constituent actions rather than treating the set of actions as a single entity.
Finally, in Section 4.5 we address the question of distinguishing direct from indirect effects. We show that direct effects can be identified by the graphical method developed in Section 4.4. An example using alleged sex discrimination in college admission will serve to demonstrate the assumptions needed for proper analysis of direct effects.
5 - Causality and Structural Models in Social Science and Economics
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 133-172
- Chapter
Summary
Do two men travel together unless they have agreed?
Amos 3:3
Preface
Structural equation modeling (SEM) has dominated causal analysis in economics and the social sciences since the 1950s, yet the prevailing interpretation of SEM differs substantially from the one intended by its originators and also from the one expounded in this book. Instead of carriers of substantive causal information, structural equations are often interpreted as carriers of probabilistic information; economists view them as convenient representations of density functions, and social scientists see them as summaries of covariance matrices. The result has been that many SEM researchers have difficulty articulating the causal content of SEM, and the most distinctive capabilities of SEM are currently ill understood and underutilized.
This chapter is written with the ambitious goal of reinstating the causal interpretation of SEM. We shall demonstrate how developments in the areas of graphical models and the logic of intervention can alleviate the current difficulties and thus revitalize structural equations as the primary language of causal modeling. Toward this end, we recast several of the results of Chapters 3 and 4 in parametric form (the form most familiar to SEM researchers) and demonstrate how practical and conceptual issues of model testing and parameter identification can be illuminated through graphical methods. We then move back to nonparametric analysis, from which an operational semantics will evolve that offers a coherent interpretation of what structural equations are all about (Section 5.4). In particular, we will provide answers to the following fundamental questions: What do structural equations claim about the world? What portion of those claims is testable? Under what conditions can we estimate structural parameters through regression analysis?
In Section 5.1 we survey the history of SEM and suggest an explanation for the current erosion of its causal interpretation. The testable implications of structural models are explicated in Section 5.2. For recursive models (herein termed Markovian), we find that the statistical content of a structural model can be fully characterized by a set of zero partial correlations that are entailed by the model. These zero partial correlations can be read off the graph using the d-separation criterion, which in linear models applies to graphs with cycles and correlated errors as well (Section 5.2).
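Reading conditional-independence claims "off the graph" is mechanical enough to program. The sketch below is one standard implementation of the d-separation test (via the ancestral moral graph), not code from the chapter, and the example DAG is hypothetical.

```python
def d_separated(dag, x, y, z):
    """Check whether x and y are d-separated by the set z in a DAG.

    dag maps each node to the set of its parents.  Uses the classical
    reduction: keep only ancestors of {x, y} | z, moralize (marry
    co-parents, drop edge directions), then ask whether every path
    from x to y is intercepted by z.
    """
    # 1. Ancestral subgraph of {x, y} | z.
    relevant, frontier = set(), {x, y} | set(z)
    while frontier:
        n = frontier.pop()
        if n not in relevant:
            relevant.add(n)
            frontier |= dag.get(n, set())
    # 2. Moralize: link each node to its parents, and co-parents to each other.
    adj = {n: set() for n in relevant}
    for child in relevant:
        parents = dag.get(child, set())
        for p in parents:
            adj[p].add(child)
            adj[child].add(p)
            adj[p] |= parents - {p}
    # 3. Undirected reachability from x to y with z deleted from the graph.
    seen, stack = set(z), [x]
    while stack:
        n = stack.pop()
        if n == y:
            return False           # found an unblocked path
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return True

# Collider X -> C <- Y: X and Y are marginally independent,
# but conditioning on the collider C connects them.
dag = {"X": set(), "Y": set(), "C": {"X", "Y"}}
print(d_separated(dag, "X", "Y", set()))   # True
print(d_separated(dag, "X", "Y", {"C"}))   # False
```

In a linear Markovian model, each such separation statement translates into a zero partial correlation that the data can be asked to confirm or refute.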
3 - Causal Diagrams and the Identification of Causal Effects
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 65-106
- Chapter
Summary
The eye obeys exactly the action of the mind.
Emerson (1860)
Preface
In the previous chapter we dealt with ways of learning causal relationships from raw data. In this chapter we explore the ways of inferring such relationships from a combination of data and qualitative causal assumptions that are deemed plausible in a given domain. More broadly, this chapter aims to help researchers communicate qualitative assumptions about cause–effect relationships, elucidate the ramifications of such assumptions, and derive causal inferences from a combination of assumptions, experiments, and data. Our major task will be to decide whether the assumptions given are sufficient for assessing the strength of causal effects from nonexperimental data.
Causal effects permit us to predict how systems would respond to hypothetical interventions – for example, policy decisions or actions performed in everyday activity. As we have seen in Chapter 1 (Section 1.3), such predictions are the hallmark of the empirical sciences and are not discernible from probabilistic information alone; they rest on – and, in fact, define – causal relationships. This chapter uses causal diagrams to give formal semantics to the notion of intervention, and it provides explicit formulas for postintervention probabilities in terms of preintervention probabilities. The implication is that the effects of every intervention can be estimated from nonexperimental data, provided the data is supplemented with a causal diagram that is acyclic and contains no latent variables.
If some variables are not measured then the question of identifiability arises, and this chapter develops a nonparametric framework for analyzing the identification of causal relationships in general and causal effects in particular. We will see that causal diagrams provide a powerful mathematical tool in this analysis; they can be queried, using extremely simple tests, to determine if the assumptions available are sufficient for identifying causal effects. If so, the diagrams produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained.
Another tool that emerges from the graphical analysis of causal effects is a calculus of interventions – a set of inference rules by which sentences involving interventions and observations can be transformed into other such sentences, thus providing a syntactic method of deriving (or verifying) claims about interventions and the way they interact with observations.
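The "explicit formulas for postintervention probabilities" rest on the truncated-factorization idea: intervening on X deletes the factor P(x | parents) from the joint. A minimal sketch with hypothetical parameters, checking that deleting the factor agrees with the familiar adjustment formula:

```python
from itertools import product

# DAG: Z -> X -> Y and Z -> Y, with hypothetical parameters.
p_z = {1: 0.4, 0: 0.6}                                       # P(z)
p_x = {1: 0.9, 0: 0.3}                                       # P(X=1 | z)
p_y = {(1, 1): 0.8, (1, 0): 0.6, (0, 1): 0.5, (0, 0): 0.1}   # P(Y=1 | x, z)

def joint(z, x, y, x_forced=None):
    """P(z, x, y) in the (possibly mutilated) model.

    Under do(X=x_forced), the factor P(x | z) is replaced by an
    indicator function - the 'truncated factorization'."""
    pz = p_z[z]
    px = (1.0 if x == x_forced else 0.0) if x_forced is not None \
         else (p_x[z] if x == 1 else 1 - p_x[z])
    py = p_y[(x, z)] if y == 1 else 1 - p_y[(x, z)]
    return pz * px * py

# Postintervention P(Y=1 | do(X=1)) by enumerating the mutilated model:
p_do = sum(joint(z, x, 1, x_forced=1) for z, x in product((0, 1), repeat=2))

# The same quantity by the adjustment formula: sum_z P(z) P(y | x, z).
p_adj = sum(p_z[z] * p_y[(1, z)] for z in (0, 1))
assert abs(p_do - p_adj) < 1e-12   # both equal 0.68 here
```

With latent variables the factor-deletion step is no longer directly computable, which is exactly where the identifiability tests and the calculus of interventions described above take over.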
7 - The Logic of Structure-Based Counterfactuals
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 201-258
- Chapter
Summary
And the Lord said,
“If I find in the city of Sodom fifty good men,
I will pardon the whole place for their sake.”
Genesis 18:26
Preface
This chapter provides a formal analysis of structure-based counterfactuals, a concept introduced briefly in Chapters 1 and 3 that will occupy the rest of our discussion in this book. Through this analysis, we will obtain sharper mathematical definitions of other concepts that were introduced in earlier chapters, including causal models, action, causal effects, causal relevance, error terms, and exogeneity.
After casting the concepts of causal model and counterfactuals in formal mathematical terms, we will demonstrate by examples how counterfactual questions can be answered from both deterministic and probabilistic causal models (Section 7.1). In Section 7.2.1, we will argue that policy analysis is an exercise in counterfactual reasoning and demonstrate this thesis in a simple example taken from econometrics. This will set the stage for our discussion in Section 7.2.2, where we explicate the empirical content of counterfactuals in terms of policy predictions. Section 7.2.3 discusses the role of counterfactuals in the interpretation and generation of causal explanations. Section 7.2 concludes with discussions of how causal relationships emerge from actions and mechanisms (Section 7.2.4) and how causal directionality can be induced from a set of symmetric equations (Section 7.2.5).
In Section 7.3 we develop an axiomatic characterization of counterfactual and causal relevance relationships as they emerge from the structural model semantics. Section 7.3.1 will identify a set of properties, or axioms, that allow us to derive new counterfactual relations from assumptions, and Section 7.3.2 demonstrates the use of these axioms in algebraic derivation of causal effects. Section 7.3.3 introduces axioms for the relationship of causal relevance and, using their similarity to the axioms of graphs, describes the use of graphs for verifying relevance relationships.
The axiomatic characterization developed in Section 7.3 enables us to compare structural models with other approaches to causality and counterfactuals, most notably those based on Lewis's closest-world semantics (Sections 7.4.1–7.4.4). The formal equivalence of the structural approach and the Neyman–Rubin potential-outcome framework is discussed in Section 7.4.4. Finally, we revisit the topic of exogeneity and extend our discussion of Section 5.4.3 with counterfactual definitions of exogenous and instrumental variables in Section 7.4.5.
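The chapter's recipe for answering counterfactual questions in deterministic models is the three-step procedure of abduction, action, and prediction, which can be illustrated on a toy linear structural model (the equations and numbers below are hypothetical):

```python
# Toy linear structural model:
#   x = u_x
#   y = 2 * x + u_y
# Observed evidence: X = 1 and Y = 5.
x_obs, y_obs = 1.0, 5.0

# Step 1 (abduction): infer the exogenous terms from the evidence.
u_x = x_obs               # from x = u_x
u_y = y_obs - 2 * x_obs   # u_y = 3

# Step 2 (action): replace the equation for X with the antecedent X = 2.
x_new = 2.0

# Step 3 (prediction): recompute downstream variables with u held fixed.
y_counterfactual = 2 * x_new + u_y   # Y_{X=2} = 7
```

The answer reads: "given that X was 1 and Y was 5, Y would have been 7 had X been 2." In probabilistic models the same three steps apply, with abduction updating a distribution over the exogenous terms instead of solving for them exactly.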
Bibliography
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 429-453
- Chapter
Subject Index
- Judea Pearl, University of California, Los Angeles
- Book: Causality
- Published online: 05 March 2013
- Print publication: 14 September 2009, pp. 460-465
- Chapter